2 research outputs found

    Semantic modelling of common data elements for rare disease registries, and a prototype workflow for their deployment over registry data

    BACKGROUND: The European Platform on Rare Disease Registration (EU RD Platform) aims to address the fragmentation of European rare disease (RD) patient data, scattered among hundreds of independent and non-coordinating registries, by establishing standards for integration and interoperability. The first practical output of this effort was a set of 16 Common Data Elements (CDEs) that should be implemented by all RD registries. Interoperability, however, requires decisions beyond data elements, including data models, formats, and semantics. Within the European Joint Programme on Rare Diseases (EJP RD), we aim to further the goals of the EU RD Platform by generating reusable RD semantic model templates that follow the FAIR Data Principles.

    RESULTS: Through a team-based iterative approach, we created semantically grounded models to represent each of the CDEs, using the SemanticScience Integrated Ontology (SIO) as the core framework for representing the entities and their relationships. Within that framework, we mapped the concepts represented in the CDEs, and their possible values, into domain ontologies such as the Orphanet Rare Disease Ontology, the Human Phenotype Ontology, and the National Cancer Institute Thesaurus. Finally, we created an exemplar, reusable ETL pipeline that we will deploy over these non-coordinating data repositories to assist them in creating model-compliant FAIR data without requiring site-specific coding or expertise in Linked Data or FAIR.

    CONCLUSIONS: Within the EJP RD project, we determined that creating reusable, expert-designed templates reduced or eliminated the requirement for our participating biomedical domain experts and rare disease data hosts to understand OWL semantics. This enabled them to publish highly expressive FAIR data using tools and approaches that were already familiar to them.
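    To make the attribute-value modelling pattern concrete, below is a minimal sketch in Python (rdflib) of how a single CDE value, a rare-disease diagnosis, might be expressed against SIO and ORDO. This is not the project's published template: the registry namespace, resource paths, and the specific SIO property chosen here are illustrative assumptions.

```python
# Illustrative sketch only -- not the EJP RD project's actual CDE template.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

SIO = Namespace("http://semanticscience.org/resource/")
ORDO = Namespace("http://www.orpha.net/ORDO/")
EX = Namespace("http://example.org/registry/")  # hypothetical registry namespace

g = Graph()
g.bind("sio", SIO)
g.bind("ordo", ORDO)

patient = EX["patient/001"]
diagnosis = EX["patient/001/diagnosis"]

# SIO_000008 ("has attribute") links an entity to one of its attributes;
# treating the diagnosis as an attribute of the patient is an assumption
# made for this sketch.
g.add((patient, SIO.SIO_000008, diagnosis))
# The attribute is typed with an ORDO disease class (Orphanet_558 is
# Marfan syndrome), so the registry's local value becomes an ontology term.
g.add((diagnosis, RDF.type, ORDO.Orphanet_558))
g.add((diagnosis, RDFS.label, Literal("diagnosis")))

print(g.serialize(format="turtle"))
```

    The point of such a template is visible even in this toy version: the registry operator supplies only the local identifiers and the ORDO code, while the SIO-based graph structure is fixed by the template, so no OWL expertise is needed at the data-hosting site.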

    High Resolution Modelling of Traffic Emissions Using the Large Eddy Simulation Code Fluidity

    The large eddy simulation (LES) code Fluidity was used to simulate the dispersion of NOx traffic emissions along a road in London. The traffic emissions were represented by moving volume sources, one for each vehicle, with time-varying emission rates. Traffic modelling software was used to generate the vehicle movement, while an instantaneous emissions model was used to calculate the NOx emissions at 1 s intervals. For comparison, the traffic emissions were also modelled as a constant volume source along the length of the road. A validation of Fluidity against wind tunnel measurements is presented before a qualitative comparison of the LES concentrations with measured roadside concentrations. Fluidity showed acceptable agreement with the wind tunnel data for velocities and turbulence intensities. The in-canyon tracer concentrations were found to differ significantly between the wind tunnel and Fluidity; this difference was explained by the very high sensitivity of the in-canyon tracer concentrations to the precise release location. Despite this, the comparison showed that Fluidity was able to provide a realistic representation of roadside concentration variations at high temporal resolution, which is not achieved when traffic emissions are modelled as a constant volume source or by Gaussian plume models.
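    As a toy illustration of the moving-volume-source representation described above (not Fluidity's actual source-term implementation), the sketch below builds a per-vehicle volumetric source whose centre follows a vehicle trajectory and whose NOx emission rate is updated at 1 s intervals. All positions, rates, grid dimensions, and the source-box size are invented for illustration.

```python
# Toy sketch of a moving volume source with a time-varying emission rate;
# every numerical value here is a made-up placeholder, not data from the paper.
import numpy as np

def source_field(x, centre, half_width, rate):
    """Uniform volumetric source of total strength `rate` (kg/s) inside a
    box of half-widths `half_width` (m) centred at `centre`; returns the
    source density in kg m^-3 s^-1 at each grid point `x`."""
    inside = np.all(np.abs(x - centre) <= half_width, axis=-1)
    volume = np.prod(2.0 * half_width)
    return np.where(inside, rate / volume, 0.0)

# Hypothetical 1 s trajectory and emission-rate series for one vehicle.
times = np.arange(5)                                          # s
positions = np.array([[t * 10.0, 0.0, 0.5] for t in times])   # m along the road
rates = np.array([1.2e-6, 1.5e-6, 0.9e-6, 1.1e-6, 1.3e-6])    # kg NOx / s

# A small structured grid of point coordinates, shape (nx, ny, nz, 3).
grid = np.stack(np.meshgrid(np.linspace(0.0, 50.0, 51),
                            np.linspace(-5.0, 5.0, 11),
                            np.linspace(0.0, 3.0, 7),
                            indexing="ij"), axis=-1)

half = np.array([2.0, 1.0, 0.75])  # vehicle-sized source box (m)
for t, centre, rate in zip(times, positions, rates):
    s = source_field(grid, centre, half, rate)  # source term at time t
```

    In an LES run, a field like `s` would be recomputed each second per vehicle and summed into the scalar transport equation's source term; the constant-source comparison case instead spreads the total emission uniformly along the road.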